Learning a priori constrained weighted majority votes
Authors
Abstract
Similar Articles
Lifelong Learning with Weighted Majority Votes
Better understanding of the potential benefits of information transfer and representation learning is an important step towards the goal of building intelligent systems that are able to persist in the world and learn over time. In this work, we consider a setting where the learner encounters a stream of tasks but is able to retain only limited information from each encountered task, such as a l...
Consistency of weighted majority votes
We revisit the classical decision-theoretic problem of weighted expert voting from a statistical learning perspective. In particular, we examine the consistency (both asymptotic and finitary) of the optimal Nitzan-Paroush weighted majority and related rules. In the case of known expert competence levels, we give sharp error estimates for the optimal rule. When the competence levels are unknown, ...
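As an illustrative sketch (not code from the paper), the Nitzan-Paroush rule assigns each expert a log-odds weight derived from its known competence level and then takes the sign of the weighted vote. The function names here are hypothetical:

```python
import math

def nitzan_paroush_weights(competences):
    # Optimal log-odds weights for independent experts with known
    # competence levels p_i (probability of voting correctly).
    return [math.log(p / (1 - p)) for p in competences]

def weighted_majority(votes, weights):
    # votes are in {-1, +1}; the decision is the sign of the weighted sum.
    s = sum(w * v for w, v in zip(weights, votes))
    return 1 if s > 0 else -1

# Example: one highly competent expert can outvote two weaker ones,
# because its log-odds weight dominates the sum.
w = nitzan_paroush_weights([0.9, 0.6, 0.6])
print(weighted_majority([1, -1, -1], w))  # → 1
```

Note that a competence of exactly 0.5 yields weight 0, so an uninformative expert is ignored by the rule.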
Learning with Randomized Majority Votes
We propose algorithms for producing weighted majority votes that learn by probing the empirical risk of a randomized (uniformly weighted) majority vote—instead of probing the zero-one loss, at some margin level, of the deterministic weighted majority vote as it is often proposed. The learning algorithms minimize a risk bound which is convex in the weights. Our numerical results indicate that le...
Option Decision Trees with Majority Votes
We describe an experimental study of Option Decision Trees with majority votes. Option Decision Trees generalize regular decision trees by allowing option nodes in addition to decision nodes; such nodes allow for several possible tests to be conducted instead of the commonly used single test. Our goal was to explore when option nodes are most useful and to control the growth of the trees so tha...
Domain adaptation of weighted majority votes via perturbed variation-based self-labeling
We tackle the PAC-Bayesian Domain Adaptation (DA) problem [1]. This arises when one wants to learn, from a source distribution, a good weighted majority vote (over a set of classifiers) on a different target distribution. In this context, the disagreement between classifiers is known to be crucial to control. In the non-DA supervised setting, a theoretical bound – the C-bound [2] – involves this disag...
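As a minimal sketch of the C-bound mentioned above (assuming its standard moment form, not code from the paper): it bounds the majority-vote risk by 1 - μ₁²/μ₂, where μ₁ and μ₂ are the first and second moments of the vote's margin, so it is small when margins are large on average and have low variance (valid when μ₁ > 0):

```python
def c_bound(margins):
    # Empirical C-bound on the weighted majority-vote risk:
    #   C = 1 - mu1^2 / mu2,
    # where mu1 and mu2 are the first and second empirical moments
    # of the margin M(x, y) of the weighted vote on each example.
    n = len(margins)
    mu1 = sum(margins) / n
    mu2 = sum(m * m for m in margins) / n
    return 1.0 - (mu1 * mu1) / mu2

# Large, low-variance margins give a bound close to 0.
print(c_bound([0.8, 0.9, 0.7, 0.85]))
```

This moment form makes explicit why controlling classifier disagreement matters: disagreement inflates the second moment μ₂, which loosens the bound.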
Journal
Journal title: Machine Learning
Year: 2014
ISSN: 0885-6125,1573-0565
DOI: 10.1007/s10994-014-5462-z